Introduction


Definitions

  • Algorithm: a method or process adopted in order to solve a problem.
  • Problem: a task to be performed; it consists of inputs and outputs.
  • Program: an instance of an algorithm written in a certain programming language.
  • Algorithm Analysis: the process of evaluating the efficiency of an algorithm in terms of time and space complexity. It helps in determining the performance of an algorithm.

Find T(n)

T(n) represents the time complexity function of an algorithm. It describes the amount of time an algorithm takes to run as a function of the input size n.

Simple fast way to find T(n)

  1. Statements: order of 1, e.g. int n = 5; is O(1).
  2. Conditions: choose the max, e.g. a condition combining an O(n) part and an O(log n) part => O(n).
  3. Loops: order of the body × number of iterations.
  4. Constants are removed.
  5. The biggest order is the dominant one.
    This method doesn't work in all cases, so we need another method.

Summation-equation method to find T(n)

Examples:

for(int i = 1; i <= n; i++) // n iterations -> O(n)
	print i;

// or

for(int i = n; i >= 1; i--) // n iterations -> O(n)
	print i;

for(int i = 1; i <= n; i*=2) // i doubles -> O(log n)
	print i;

// or 

for(int i = n; i >= 1; i/=2) // i halves -> O(log n)
	print i;

for(int i = 1; i <= n; i++) // n * n iterations -> O(n^2)
	for(int j = 1; j <= n; j++)
		print i;

for(int i = 1; i <= n; i*=3) // log n outer * n inner -> O(n log n)
	for(int j = 1; j <= n; j++)
		print i;

for(int i = 1; i <= n; i*=3) // log n outer * log n inner -> O(log^2 n)
	for(int j = 1; j <= n; j*=3)
		print i;

for(int i = 1; i <= n; i++) // sum of (n - i) = n(n-1)/2 -> O(n^2)
	for(int j = i + 1; j <= n; j++)
		print i;


Asymptotic Notations: Big-O, Omega, Theta

Asymptotic notations describe the growth rate of an algorithm's running time as the input size approaches infinity. They help us compare algorithms based on their efficiency, ignoring constant factors and lower-order terms.

The asymptotic order means: the term with the largest exponent in the equation (the dominant term).

1. Big-O Notation

  • Represents the worst-case / upper bound of an algorithm.
  • It tells us the maximum time an algorithm can take for large n.
  • Formally, T(n) = O(f(n)) if there exist constants c > 0 and n0 >= 1 such that:
    T(n) <= c * f(n) for all n >= n0

Example: If T(n) = 3n + 2, then T(n) <= 4n for all n >= 2, so T(n) = O(n) and T(n) = O(n^2).

2. Omega Notation

  • Represents the best-case / lower bound of an algorithm.
  • It tells us the minimum time an algorithm will take.
  • Formally, T(n) = Ω(f(n)) if there exist constants c > 0 and n0 >= 1 such that:
    T(n) >= c * f(n) for all n >= n0

Example: If T(n) = 3n + 2, then T(n) >= 3n for all n >= 1, so T(n) = Ω(n).

3. Theta Notation

  • Represents the tight bound (both upper and lower); often used for the average case.
  • It tells us that an algorithm runs in exactly a certain order of time.
  • Formally, T(n) = Θ(f(n)) if there exist constants c1 > 0, c2 > 0 and n0 >= 1 such that:
    c1 * f(n) <= T(n) <= c2 * f(n) for all n >= n0

Example: If T(n) = 3n + 2, then 3n <= T(n) <= 4n for all n >= 2, so T(n) = Θ(n).

Relationship Between O, Ω, and Θ

  • If T(n) = Θ(f(n)), then it equals both O(f(n)) and Ω(f(n)).
  • If O(f(n)) = Ω(f(n)), then T(n) = Θ(f(n))

Examples:

How to find c and n0 for O and Ω

For O

  1. Write down all the constants (with their signs) multiplied by the highest power of n; their sum gives c.
  2. To find n0, make the two sides equal and solve for n.

For Ω

  1. Write down the highest-order term with its constant; that constant is c.
  2. To find n0, make the two sides equal and solve for n.

Different types of questions

![[Pasted image 20250402213534.png]]

![[Pasted image 20250402213152.png]]

Little-o and little-omega

They are just like the regular O and Ω, but without the "or equal" part (the bound is strict).
![[Pasted image 20250402213851.png]]
![[Pasted image 20250402214059.png]]

Important method

If you're asked: how does f(n) compare to g(n)?

  1. Put it in the form lim (n -> ∞) f(n) / g(n).
  2. If the limit is 0, then f(n) = o(g(n)); if it is a non-zero constant, then f(n) = Θ(g(n)); if it is ∞, then f(n) = ω(g(n)).
Example 1

Example 2


Growth Rate

Growth Rate VS Time

The growth-rate ordering is the opposite of the time (speed) ordering: the faster a function grows, the slower the algorithm.

![[Pasted image 20250402214551.png]]


Recursion Equation

Any recursive function has a running time of the following format: T(n) = (time of the recursive calls) + (time of the non-recursive work inside the body).

Examples:

int fact(int n){
	if(n == 0) return 0;
	else return fact(n - 1);
}
// T(n) = T(n - 1) + 1 -> O(n)

// or 

int fact(int n){
	if(n == 0) return 1;
	else return n * fact(n - 1);
}
// Also T(n) = T(n - 1) + 1 -> O(n)

The n here is just a constant number multiplied by the function's result; it does not affect the running time.

int fact(int n){
	if(n == 0) return 0;
	else return fact(n/2);
}
// T(n) = T(n/2) + 1 -> O(log n)

int fact(int n){
	if(n == 0) return 0;
	else return fact(n/2) + fact(n/2);
}
// T(n) = 2T(n/2) + 1 -> O(n)

int fact(int n){
	if(n == 0) return 0;
	else return fact(n/2) + fact(3*n/2);
}
// T(n) = T(n/2) + T(3n/2) + 1

int fact(int n){
	if(n == 0){
		for(int i = 1; i <= n; i++) {
			cout << i;
		}
		return 1;
	}
	else return fact(n - 1) + fact(n - 2);
}
// T(n) = T(n - 1) + T(n - 2) + 1 -> O(2^n)

Master method to solve recurrence

It solves some types of recurrences; it only works on equations of the following form:
T(n) = a*T(n/b) + n^k, where a >= 1 and b > 1.
We then compare log_b(a) with k:
  • log_b(a) > k -> T(n) = Θ(n^(log_b a))
  • log_b(a) = k -> T(n) = Θ(n^k * log n)
  • log_b(a) < k -> T(n) = Θ(n^k)

Examples:

![[Pasted image 20250402230346.png]]
![[Pasted image 20250402230359.png]]

Another formula

if they're not equal we can't use this method.

![[Pasted image 20250402224416.png]]

Advanced Problem

![[VS--CourseAlgorithmAnalysisDesignCourseFromAtoZArabicUdemy-12’26”.jpg]]


Tree method to solve recurrence

The recursion tree method is a visual way to analyze recurrence relations by expanding them level by level and summing up the work done at each level. This helps us find the overall complexity of an algorithm.

Steps in the Tree Method

  1. Expand the recurrence tree level by level.
  2. Find the number of nodes and the work done at each level.
  3. Sum up the work done across all levels.
  4. Determine the height of the tree and compute the final sum.

Example 1: Simple Recurrence

Each call to T(n) makes two recursive calls to T(n/2), and at each level, the extra work is n.

![[Pasted image 20250402234857.png]]

Since we divide by 2 each time until we reach the base case, the height = log2(n).
If instead we subtracted 1 (or any constant) each time, the height would be n.

Summing the work done at each level: each of the ~log2(n) levels does about n total work, so T(n) = O(n log n).

Dealing with Geometric Series

Each term equals the previous term multiplied by a constant.

Examples

![[Pasted image 20250403001519.png]]
![[Pasted image 20250403001343.png]]

Dealing with Arithmetic Series

Each term equals the previous term plus a constant; we try to put it in the form of the sum formula.

![[Pasted image 20250403001735.png]]
![[Pasted image 20250403153222.png]]


Substitution method to solve recurrence

The substitution method is a powerful technique for solving recurrence relations by:

  1. Guessing a solution (using intuition or pattern recognition).
  2. Proving the guess using mathematical induction.

Steps in the Substitution Method

  1. Guess the solution using pattern expansion or intuition.
  2. Use induction to prove that the guess holds.
  3. Solve for constants if necessary.

Example 1: Solve

Step 1: Expand the Recurrence

Expanding the recurrence for a few levels:
After expansions:

Step 2: Identify Base Case

The recursion stops when , meaning , so: .

Step 3: Compute the Total Work

The summation simplifies:
Final Answer:

Example 2:

Step 1: Expand the Recurrence

Expanding for a few values:

Continuing this process, after steps:

Step 2: Find the Base Case

If we assume the base case is , then setting gives:

Step 3: Compute the Final Expression

Substituting :
Since , we get:
Final Answer:


Space Complexity

Space complexity measures the total memory used by an algorithm as a function of the input size n.

Here we compute it the same way, but we only care about the lines that store data in memory. For example:

int x = 5; // 1 cell
int y = 7; // 2 cells
int z = x + y; // 3 cells
cout << z; // no new storage
// Total = 3 cells -> O(1)

It's simple, but there are a few tricks with recursive functions.

Memory Stack

A region of memory that stores function frames and static variables; it operates on a Last-In-First-Out (LIFO) basis.
A detailed explanation is in the professor's video.

Each recursive call adds a new function frame to the call stack, increasing space usage. For a recurrence like T(n) = T(n - 1) + O(1), the recursion depth is n, so the stack space is O(n).

Auxiliary Space vs. Total Space

  • Auxiliary Space: Extra memory used excluding input storage.
  • Total Space: Input storage + Auxiliary space.

Examples:

void printArray(int arr[], int n) {
    for (int i = 0; i < n; i++) { // only the counter i is stored -> O(1) auxiliary space
        cout << arr[i] << " ";
    }
}

void recursiveFunc(int n) {
    if (n == 0) return;
    recursiveFunc(n - 1); // n stacked frames -> O(n) space
}

int fib(int n) {
    if (n <= 1) return n;
    return fib(n-1) + fib(n-2); // max stack depth is n -> O(n) space
}

The recursive Fibonacci function makes two calls, but they are not executed simultaneously. The recursion fully evaluates fib(n-1) before calling fib(n-2), so the maximum stack depth (and space) is O(n), not O(2^n).


Linear Search

The algorithm checks each element in an array one by one.

int linearSearch(int arr[], int n, int target) {
    // Loop through each element of the array
    for (int i = 0; i < n; i++) {
        // Check if the current element matches the target
        if (arr[i] == target) {
            return i;  // Return the index if found
        }
    }
    return -1;  // Return -1 if the target is not found
}

Time Complexity

  • Worst-Case Time Complexity: O(n)
    In the worst case (if the target is not present or is at the very end), the algorithm makes n comparisons.
  • Average-Case Time Complexity: O(n)
    On average, it might inspect about half the elements before finding the target, but in Big-O notation, this still simplifies to O(n).
  • Best-Case Time Complexity: O(1)
    If the target is the first element, it will be found in O(1) time.

Space Complexity

O(1): only a few variables (i, n, target) are used, regardless of input size.


Binary Search

Binary search is an efficient algorithm for finding a target element in a sorted array.

It works by repeatedly dividing the search interval in half. If the target value is less than the value in the middle of the interval, the algorithm continues on the lower half. Otherwise, it continues on the upper half. This process repeats until the target is found or the search interval is empty.

int binarySearchRecursive(int arr[], int low, int high, int target) {
    if (low > high) 
        return -1;
    
    int mid = low + (high - low) / 2;  // (high + mid) was a bug; this form also avoids overflow

    if (arr[mid] == target) 
        return mid;  

    if (arr[mid] < target)  
        return binarySearchRecursive(arr, mid + 1, high, target);
    return binarySearchRecursive(arr, low, mid - 1, target);  
}

Time Complexity:

  • Worst-case / Average-case: O(log n), because the search space is halved each time.
  • Best-case: O(1), if the target is the middle element in the first iteration.

Space Complexity

O(log n) for this recursive version (one stack frame per halving); an iterative version needs only O(1).


Insertion Sort

Insertion Sort is a simple and efficient sorting algorithm.
It builds the sorted sequence one element at a time by picking the next element and placing it in its correct position within the already sorted part.

How Insertion Sort Works

  1. Start from the second element (index 1) because a single element (index 0) is already sorted.
  2. Pick the next element and compare it with the elements before it.
  3. Shift elements to the right to make space for the picked element.
  4. Insert the picked element in its correct position.
  5. Repeat the process for all elements.
void insertionSort(vector<int>& arr) {
    int n = arr.size();
    for (int i = 1; i < n; i++) {
        int key = arr[i], j = i - 1;
        while (j >= 0 && arr[j] > key){
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}

Time Complexity

  • Best Case (Already Sorted): O(n)
  • Worst Case (Reverse Sorted): O(n^2)
  • Average Case: O(n^2)

Space Complexity

O(1): sorting happens in place.

Properties

  • in place -> reorders the elements in the same array.
  • stable -> elements with equal values keep their relative order.

Selection Sort

Selection Sort is a simple comparison-based sorting algorithm. The idea is to repeatedly select the minimum (or maximum) element from the unsorted part of the array and move it to the correct sorted position.

Steps

  1. Start from index 0.
  2. Find the minimum element from arr[i] to arr[n-1].
  3. Swap that minimum element with arr[i].
  4. Move to the next index (i + 1) and repeat until the array is sorted.
void selectionSort(vector<int>& arr) {
    int n = arr.size();
    for (int i = 0; i < n - 1; i++) {
        int minIdx = i;
        for (int j = i + 1; j < n; j++)
            if (arr[j] < arr[minIdx]) minIdx = j;
        swap(arr[i], arr[minIdx]);
    }
}

Time Complexity

  • Best Case (Already Sorted): O(n^2)
  • Worst Case (Reverse Sorted): O(n^2)
  • Average Case: O(n^2)

Space Complexity

O(1): sorting happens in place.

Properties

  • in place -> reorders the elements in the same array.
  • not stable -> elements with equal values don't keep their relative order.
    • can be converted into a stable sort by shifting elements instead of swapping

Quick Sort

Quick Sort follows the Divide and Conquer strategy:

  1. Pick a "pivot" element.
  2. Partition the array so that:
    • All elements less than the pivot go to the left.
    • All elements greater than the pivot go to the right.
  3. Recursively apply the same logic to the left and right subarrays.

Code

int partition(vector<int>& arr, int low, int high) {
    int pivot = arr[high], i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
			i++;
            swap(arr[i], arr[j]);
        }
    }
    swap(arr[i + 1], arr[high]);
    return i + 1;
}
void quickSort(vector<int>& arr, int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

Time Complexity

  • Best Case (balanced partitions): O(n log n)
  • Worst Case (already sorted or reverse sorted, with this last-element pivot): O(n^2)
  • Average Case: O(n log n)

Space Complexity

O(log n) average recursion depth (O(n) in the worst case).

Properties

  • in place -> reorders the elements in the same array.
  • not stable -> elements with equal values don't keep their relative order.

Merge Sort

Merge Sort is a Divide and Conquer algorithm:

  1. Divide the array into two halves.
  2. Recursively sort each half.
  3. Merge the sorted halves into a single sorted array.

Code

void merge(vector<int>& arr, int l, int m, int r) {
    int n1 = m - l + 1, n2 = r - m;
    vector<int> L(n1), R(n2);
    for (int i = 0; i < n1; i++) L[i] = arr[l + i];
    for (int j = 0; j < n2; j++) R[j] = arr[m + 1 + j];

    int i = 0, j = 0, k = l;
    while (i < n1 && j < n2)
        arr[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
    while (i < n1) arr[k++] = L[i++];
    while (j < n2) arr[k++] = R[j++];
}
void mergeSort(vector<int>& arr, int l, int r) {
    if (l < r) {
        int m = l + (r - l) / 2;
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);
        merge(arr, l, m, r);
    }
}

Time Complexity

  • Best Case (Already Sorted): O(n log n)
  • Worst Case (Reverse Sorted): O(n log n)
  • Average Case: O(n log n)

Space Complexity

O(n) for the temporary arrays L and R (plus O(log n) recursion stack).

Properties

  • out of place -> reorders the elements in other arrays.
  • stable -> elements with same values remain in the same order.

Count Sort

Counting Sort is a non-comparison sorting algorithm ideal for sorting integers within a known, limited range.
Instead of comparing elements, it:

  1. Counts how many times each value occurs.
  2. Uses these counts to determine the positions of each element in the sorted output.
void countSort(vector<int>& arr) {
    int max_val = *max_element(arr.begin(), arr.end());

    // Step 1: Create and initialize count array
    vector<int> count(max_val + 1, 0);

    // Step 2: Store count of each element
    for (int num : arr)
        count[num]++;

    // Step 3: rebuild the sorted array
    int k = 0;
    for (int i = 0; i <= max_val; i++) {
        for (int j = 0; j < count[i]; j++){
            arr[k++] = i;
        }
    }
}

Time Complexity

  • Best Case: O(n + k)
  • Worst Case: O(n + k)
  • Average Case: O(n + k)
    k: range of input values.

Space Complexity

O(k) for the count array.

Properties

  • out of place -> reorders the elements in other arrays.
  • stable -> elements with same values remain in the same order.

Comparison

Searching algorithms

| Algorithm     | Best | Avg      | Worst    | Space | In-Place | Stable | Notes                  |
| ------------- | ---- | -------- | -------- | ----- | -------- | ------ | ---------------------- |
| Linear Search | O(1) | O(n)     | O(n)     | O(1)  | Yes      | N/A    | Works on unsorted data |
| Binary Search | O(1) | O(log n) | O(log n) | O(1)  | Yes      | N/A    | Requires sorted data   |

Sorting algorithms

| Algorithm      | Best       | Avg        | Worst      | Space    | In-Place | Stable | Notes                              |
| -------------- | ---------- | ---------- | ---------- | -------- | -------- | ------ | ---------------------------------- |
| Selection Sort | O(n^2)     | O(n^2)     | O(n^2)     | O(1)     | Yes      | No     | Simple, always O(n^2)              |
| Insertion Sort | O(n)       | O(n^2)     | O(n^2)     | O(1)     | Yes      | Yes    | Fast on nearly sorted arrays       |
| Quick Sort     | O(n log n) | O(n log n) | O(n^2)     | O(log n) | Yes      | No     | Fast in practice, not stable       |
| Merge Sort     | O(n log n) | O(n log n) | O(n log n) | O(n)     | No       | Yes    | Very predictable, uses extra space |
| Count Sort     | O(n + k)   | O(n + k)   | O(n + k)   | O(k)     | No       | Yes    | Only for integers / small range    |

Algorithm Design Techniques

Algorithm design techniques are systematic approaches to solving computational problems efficiently.
In other words, when a programming problem comes your way, you don't have to improvise a solution from scratch: there are established techniques, each with its own way of thinking that leads you to a fast, well-organized solution.

Brute Force

It is the easiest and simplest way to solve any problem: you try every possible solution until you find the correct one.

Characteristics:
  • Straightforward and direct approach
  • Explores all possible solutions
  • Often serves as a baseline for comparison (implement it first, then develop a better algorithm and measure the difference)
  • Generally simple to implement
  • Usually inefficient for large inputs
When to use:
  • Problem size is small
  • No obvious optimized algorithm exists
  • Quick implementation is required
  • For algorithm verification

Use it when the problem size is small and has few elements, when no better or faster solution exists, when you need to write the code quickly, or when you have a solution built with another technique and want to make sure it is correct, so you use brute force and compare the other algorithm's output against its output.

Limitations:
  • Exponential or factorial time complexity for many problems
  • Impractical for large inputs

The time complexity is usually very, very bad.

Divide and Conquer

Characteristics:
  • Breaks problem into smaller, similar subproblems
  • Solves subproblems recursively
  • Combines solutions to solve the original problem
Key steps:
  1. Divide: Break the problem into smaller subproblems
  2. Conquer: Solve subproblems recursively
  3. Combine: Merge solutions of subproblems
When to use:
  • Problems with recursive structure
  • When subproblems are independent <- important
  • For parallelizable computations
Advantages:
  • Often yields efficient algorithms
  • Amenable to parallel processing

It can run in parallel: its work can be distributed across multiple CPU cores at the same time.

  • Natural for recursive problems

The idea is that when you face a big problem, instead of solving it all at once, you split it into smaller problems (the "Divide" step), solve each one on its own, usually with recursion (the "Conquer" step), and then merge the results of those small solutions to produce the solution of the original problem (the "Combine" step). This approach is excellent when the problem can be split into non-overlapping parts, and it is highly efficient, especially for problems that can be solved in parallel. Well-known examples like Merge Sort and Binary Search rely on this same idea.

Greedy Algorithms

Characteristics:
  • Makes locally optimal choices at each step
  • Hopes that these choices lead to a globally optimal solution
  • Never reconsiders previous choices
Key components:
  • Optimal substructure: An optimal solution contains optimal solutions to subproblems
  • Greedy choice property: A globally optimal solution can be found by making locally optimal choices
When to use:
  • Problems where local optimum leads to global optimum
  • When you can prove the greedy approach is correct
Limitations:
  • Does not work for all problems
  • May not find the global optimum

Greedy is a step-by-step approach: at each step it picks the best option visible right now, hoping these choices eventually lead to the best final solution. Its advantage is that it is fast and simple; its drawback is that it never goes back to fix itself, so if it makes a wrong decision it keeps going with it. That's why it doesn't work for every problem, only for problems with two key properties: first, the final solution is composed of optimal solutions to smaller parts (optimal substructure), and second, the overall optimal solution can be reached through locally optimal choices (the greedy choice property).

Dynamic Programming

Characteristics:
  • Breaks down problem into overlapping subproblems
  • Stores results of subproblems to avoid redundant calculations
  • Builds solution bottom-up or top-down (memoization)
Key components:
  • Optimal substructure
  • Overlapping subproblems

"Overlapping subproblems" means that while solving the problem you find the same part being computed more than once: the problem repeats calculations that already happened before. We can store those results instead of computing them again, and reuse them whenever we need them later.

When to use:
  • Problems with overlapping subproblems
  • When recursive solutions have exponential complexity
  • Optimization problems requiring optimal solution
Approaches:
  • Top-down (Memoization): Recursive approach with caching
  • Bottom-up (Tabulation): Iterative approach building from smallest subproblems

Dynamic programming is a way of solving problems in which the same steps repeat many times: instead of redoing the computation over and over, we compute a value once and store the result (that's the "memory" idea). It works well for problems with repeated subproblems, and there are two ways to apply it: either Top-down (recursion + memoization, descending to the small subproblems and coming back up with the results) or Bottom-up (starting from the smallest subproblem and building upward step by step in a table).

DP is very useful when the plain solution (such as bare recursion) takes far too long, as in problems like Fibonacci, knapsack, and longest common subsequence.

Example Fibonacci Numbers

Calculate the nth Fibonacci number, where F(0) = 0, F(1) = 1, and F(n) = F(n-1) + F(n-2) for n >= 2.

Brute Force Approach (Recursive)
int fib(int n) {
	if(n == 0 || n == 1) return n;
	return fib(n-1) + fib(n-2);
}
  • Directly implement the recursive definition
  • No optimization: values are recalculated even if previously computed
  • Time Complexity: O(2^n) - Exponential due to redundant calculations
  • Space Complexity: O(n) - Maximum depth of the recursion stack
Divide and Conquer Approach

For Fibonacci, a pure divide and conquer approach would be similar to the brute force recursive solution.
However, we can use matrix exponentiation which is a more efficient divide and conquer strategy.

  • Use the mathematical property that Fibonacci numbers can be computed using matrix exponentiation
  • Divide the problem of calculating the power into smaller powers
  • Apply the divide and conquer approach to matrix exponentiation
  • Time Complexity: O(log n) - Due to the efficiency of the divide and conquer approach for exponentiation
  • Space Complexity: O(log n) - Due to the recursion stack
Greedy Approach

The Fibonacci sequence doesn't naturally lend itself to a greedy approach since:

  1. There's no locally optimal choice to make at each step
  2. The problem requires knowing previous values rather than making independent choices

A simple iterative calculation is often labeled as "greedy" but is more accurately just an iterative approach.
int fib(int n) {
	if(n == 0 || n == 1) return n;

	int a = 0, b = 1, temp;
	for (int i = 2; i <= n; i++) {
		temp = a + b;
		a = b;
		b = temp;
	}
	return temp;
}
  • Start with the base values
  • Iteratively compute the next Fibonacci number using the previous two
  • Only store the two most recent values
  • Time Complexity: O(n) - We perform n-1 iterations
  • Space Complexity: O(1) - We only store two variables regardless of input size
Dynamic Programming Approach
Top-down with Memoization
  1. Use recursion with memoization (caching)
  2. Store computed Fibonacci values to avoid redundant calculations
int arr[n + 1]; // initialize all entries with -1

int fib(int n) {
	if(n == 0 || n == 1) return n;

	if(arr[n] != -1) return arr[n];
	arr[n] = fib(n-1) + fib(n-2);
	return arr[n];
}
  • Before computing a value, check if it's already calculated
  • Store each calculated value for future use
  • Time Complexity: O(n) - Each Fibonacci number is calculated exactly once
  • Space Complexity: O(n) - For the memoization table plus recursion stack
Bottom-up with Tabulation
  1. Build a table of Fibonacci numbers from the bottom up
  2. Use previously computed values to calculate new ones
int fib(int n) {
	int dp[n + 1];
	dp[0] = 0;
	dp[1] = 1;
	for (int i = 2; i <= n; i++) {
		dp[i] = dp[i-1] + dp[i-2];
	}
	return dp[n];
}
  • Start from the base cases and iteratively build up the solution
  • Time Complexity: O(n) - We compute each Fibonacci number once
  • Space Complexity: O(n) - For the DP table
Space-Optimized Dynamic Programming

Same as the iterative version above, but this trick only works in the Fibonacci case, where just the last two values are needed.

Comparison for the Fibonacci Problem

![[Pasted image 20250616112327.png]]

The choice of algorithm design technique

depends on the specific problem characteristics:

  1. Brute Force: Simple problems or when other techniques don't provide significant advantages.
  2. Divide and Conquer: Problems that can be broken into independent subproblems and combined.
  3. Greedy: Problems where local optimal choices lead to global optimal solution.
  4. Dynamic Programming: Problems with overlapping subproblems and optimal substructure.